
    Earthquake source parameters of the 2009 Mw 7.8 Fiordland (New Zealand) earthquake from L-band InSAR observations

    The 2009 Mw 7.8 Fiordland (New Zealand) earthquake is the largest to have occurred in New Zealand since the 1931 Mw 7.8 Hawke's Bay earthquake, 1,000 km to the northeast. In this paper, two tracks of ALOS PALSAR interferograms (one ascending and one descending) are used to determine the fault geometry and slip distribution of this large earthquake. Modeling the event as a dislocation in an elastic half-space suggests that the earthquake resulted from slip on an SSW-NNE oriented thrust fault associated with the subduction between the Pacific and Australian Plates, with oblique displacement of up to 6.3 m. This finding is consistent with preliminary studies undertaken by the USGS using seismic data.
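    An interferogram measures only the projection of ground motion onto the radar line of sight (LOS), which is why combining an ascending and a descending track constrains the fault geometry far better than either track alone. A minimal sketch of that projection follows; the incidence angle and headings are illustrative values for an L-band sensor such as ALOS PALSAR, not parameters taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' code): project a 3-D ground
# displacement onto the radar line of sight, the quantity an InSAR
# interferogram actually measures. Conventions assumed here:
# right-looking SAR, heading measured clockwise from north,
# positive LOS = motion toward the satellite.
def los_displacement(d_east, d_north, d_up, incidence_deg, heading_deg):
    inc = np.radians(incidence_deg)
    look_az = np.radians(heading_deg + 90.0)  # right-looking geometry
    # Unit vector from the ground target toward the satellite.
    p = np.array([-np.sin(inc) * np.sin(look_az),
                  -np.sin(inc) * np.cos(look_az),
                   np.cos(inc)])
    return np.dot(p, [d_east, d_north, d_up])

# Illustrative numbers only: the same coseismic displacement looks
# different on ascending and descending tracks, which is what lets
# two tracks resolve the fault geometry and slip.
d = (0.5, 1.2, 0.3)  # metres, hypothetical east/north/up motion
print(los_displacement(*d, incidence_deg=38.7, heading_deg=-10.0))   # ascending
print(los_displacement(*d, incidence_deg=38.7, heading_deg=-170.0))  # descending
```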

    Inclusion agglomeration in electrified molten metal: thermodynamic consideration

    The effect of electric current on inclusion agglomeration in molten metal has been investigated. Agglomeration is found to depend on the electric current density, the distance between inclusions, and the orientation of the inclusion pair relative to the electric field. Electric current retards agglomeration unless the two inclusions are aligned along, or nearly along, the current streamlines and the distance between them is less than a critical value. The mechanism is also validated in computations of cluster agglomeration. The numerical results give a comprehensive picture of current-induced inclusion removal and current-induced inclusion elongation. When the inclusions are long and thin, the calculations predict current-induced microstructure alignment and current-induced microstructure refinement.
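    The orientation and distance dependence described above can be illustrated with a toy dipole picture: an insulating inclusion diverts the current around itself and, to leading order, interacts with a neighbouring inclusion like an induced dipole. The sketch below assumes an interaction energy scaling of (1 - 3 cos^2 theta)/d^3 with a schematic prefactor; it illustrates the sign structure (attraction along the current, repulsion transverse to it), not the paper's model.

```python
import numpy as np

# Toy sketch only (not the paper's model): treat each insulating
# inclusion in the current-carrying melt as an induced "current
# dipole". In this approximation the pair interaction energy scales
# as (1 - 3*cos^2(theta)) / d^3: negative (attractive) when the pair
# axis is aligned with the current, positive (repulsive) when
# transverse. The prefactor k is schematic; physically it grows with
# current density and inclusion size.
def pair_interaction(d, theta, k=1.0):
    """d: centre-to-centre distance; theta: angle between the pair
    axis and the current direction. Negative sign means attraction."""
    return k * (1.0 - 3.0 * np.cos(theta) ** 2) / d ** 3

for theta_deg in (0.0, 30.0, 54.7, 90.0):
    u = pair_interaction(d=2.0, theta=np.radians(theta_deg))
    print(f"theta = {theta_deg:5.1f} deg -> interaction {u:+.4f}")
# Aligned pairs (theta ~ 0) attract and can agglomerate; transverse
# pairs repel, matching the orientation dependence described above.
# The sign changes near the "magic angle" of about 54.7 degrees.
```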

    A Body-Nonlinear Green's Function Method with Viscous Dissipation Effects for Large-Amplitude Roll of Floating Bodies

    A novel time-domain body-nonlinear Green's function method is developed for evaluating the large-amplitude roll damping of two-dimensional floating bodies, with viscous dissipation effects taken into account. In the method, the instantaneous wetted surface of the floating body is modeled accurately, and viscous dissipation is incorporated through the "fairly perfect fluid" model. Compared with the method based on the existing inviscid body-nonlinear Green's function, the proposed method gives more accurate damping coefficients for floating bodies rolling on the free surface at large amplitudes, as shown by numerical tests and comparisons with experimental data for several ship hull sections with bilge keels.
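    To see what a large-amplitude roll damping prediction is typically checked against, the sketch below integrates a generic one-degree-of-freedom roll decay model in which the hydrodynamic damping that the proposed method would supply is lumped into a linear term plus a quadratic (bilge-keel-like) term. All coefficients are hypothetical placeholders; this is a free-decay illustration, not the body-nonlinear Green's function solver itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical coefficients for a generic 2-D hull section.
I_total = 1.0e4   # roll inertia + added inertia, kg*m^2
b1 = 2.0e2        # linear damping coefficient, N*m*s
b2 = 1.5e3        # quadratic (viscous) damping coefficient, N*m*s^2
c = 5.0e4         # hydrostatic restoring coefficient, N*m

def rhs(t, y):
    """Roll decay: (I+A)*phi'' = -b1*phi' - b2*phi'|phi'| - c*phi."""
    phi, dphi = y
    moment = -b1 * dphi - b2 * dphi * abs(dphi) - c * phi
    return [dphi, moment / I_total]

# Free decay from a large initial heel angle; the decay of successive
# roll peaks is what damping predictions are compared against.
sol = solve_ivp(rhs, (0.0, 60.0), [np.radians(20.0), 0.0], max_step=0.01)

phi = sol.y[0]
peaks = phi[1:-1][(phi[1:-1] > phi[:-2]) & (phi[1:-1] > phi[2:])]
print("successive roll peaks (deg):", np.round(np.degrees(peaks[:5]), 2))
```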

    Multiple costs and their combination in cost sensitive learning

    University of Technology, Sydney. Faculty of Information Technology.

    Cost sensitive learning is defined, in the first instance, as the problem of minimizing the costs of classification errors. It has attracted much attention in the last few years, because being cost sensitive makes it possible to handle the imbalance among misclassification errors that arises in some real-world applications. Recently, researchers have considered how to deal with two or more costs in one model, for example both misclassification costs (the cost of classification errors) and attribute test costs (the cost incurred in obtaining an attribute's value) [Tur95, GGR02, LYWZ04]. Cost sensitive learning involving both attribute test costs and misclassification costs is called test cost sensitive learning; it is closer to real industry concerns such as medical research and business decision making. Current test cost sensitive learning aims to find an optimal diagnostic policy (simply, a policy) with minimal expected sum of misclassification cost and test cost. Such a policy specifies, for example, which attribute test to perform next based on the outcomes of previous tests, and when to stop (by choosing to classify). A diagnostic policy takes the form of a decision tree whose internal nodes specify tests and whose leaves specify classification actions; a challenging issue is how to choose a reasonable policy from all possible ones.

    This dissertation argues for considering the test cost and the misclassification cost, or even more costs, together, but questions whether the current approach of summing the two costs is the only right one. Detailed studies are needed to ensure that the ways of combining costs make sense and are "correct", dimensionally as well as semantically. This dissertation studies fundamental properties of the costs involved and designs new models for combining them.

    Some essential properties of attribute test costs are studied first. In our problem definition, test cost is combined with misclassification cost by choosing and performing the tests that support a better decision. Why are some tests chosen and others not? Very often only part of the attribute values are enough for making a decision, and the remaining attributes are left unknown. These values are defined as "absent values" because they are left unknown purposely, for rational reasons: the information already obtained is considered sufficient, the patient cannot afford further tests, and so on. This is the first work to utilize the information hidden in these "absent values" in cost sensitive learning, and the conclusion is positive: absent data is useful for decision making. Values left unknown for unexpected reasons, by contrast, are usually treated as "missing values". This thesis studies the difference between "absent" and "missing", proposes an algorithm based on lazy decision trees to distinguish absent data from missing data, and proposes a novel strategy for patching the genuinely missing values.

    Two novel test cost sensitive models are designed for different real-world scenarios. The first is a general test cost sensitive learning framework with multiple cost scales. Previous work assumes that the test cost and the misclassification cost are defined on the same cost scale, such as the dollar cost incurred in a medical diagnosis, and aims to minimize their sum. However, costs may be measured in very different units, and it can be difficult to define multiple costs on a single scale. This is not only a technical issue but also a social one: in medical diagnosis, how much money should be assigned to a misclassification? A misclassification may cost a patient's life, and from a social point of view, life is invaluable. To tackle this issue, a target-resource budget learning framework with multiple costs is proposed. Within this framework, we present a test cost sensitive decision tree model with two cost scales; the task is to minimize one cost scale, called the target cost, while keeping the other within a specified budget. To the best of our knowledge, this is the first attempt to study cost sensitive learning with multiple cost scales.

    The second model is based on the assumption that some attribute values of an unlabeled example are known before it is classified. A test cost sensitive lazy tree model is proposed that utilizes this known information to reduce the overall cost. We also modify and apply this model to the batch-test problem, in which multiple tests are chosen and performed in one shot rather than sequentially; this matters in diagnosis applications that require a decision as soon as possible, such as emergency treatment. Extensive experiments evaluate the proposed approaches and demonstrate that the work in this dissertation is efficient and useful for many diagnostic tasks involving target cost minimization and resource utilization for obtaining missing information.
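    The core of a diagnostic policy as described above is the decision, at each node, between classifying now and paying for one more test. The sketch below implements a one-step-lookahead version of that trade-off with hypothetical costs and posteriors; it illustrates the sum-of-costs objective the dissertation starts from, not the dissertation's own algorithms.

```python
import numpy as np

# mis_cost[i][j] = cost of predicting class j when the truth is i.
# All numbers here are hypothetical.
mis_cost = np.array([[0.0, 800.0],    # true class 0
                     [600.0, 0.0]])   # true class 1

def expected_mis_cost(p):
    """Expected misclassification cost of the best classify-now
    action, given class probabilities p."""
    return min(float(p @ mis_cost[:, j]) for j in range(len(p)))

def choose_action(p, tests):
    """One-step lookahead: classify now, or perform the test whose
    test cost plus expected posterior misclassification cost is
    cheapest. tests maps a test name to
    (test_cost, [(outcome_prob, posterior), ...])."""
    best = ("classify", expected_mis_cost(p))
    for name, (t_cost, outcomes) in tests.items():
        future = sum(w * expected_mis_cost(post) for w, post in outcomes)
        if t_cost + future < best[1]:
            best = (name, t_cost + future)
    return best

p = np.array([0.6, 0.4])  # current class probabilities
tests = {  # hypothetical test: costs 50 units, sharpens the posterior
    "blood_test": (50.0, [(0.5, np.array([0.9, 0.1])),
                          (0.5, np.array([0.3, 0.7]))]),
}
print(choose_action(p, tests))  # the test is chosen iff it pays off
```

    Classifying now costs 240 in expectation (predict class 0), while testing costs 50 plus an expected 150 afterwards, so the policy performs the test. Repeating this choice at every node grows exactly the kind of test-then-classify decision tree described above.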

    Distinguishing RBL-like objects and XBL-like objects with the peak emission frequency of the overall energy spectrum

    We investigate quantitatively how the peak emission frequency of the overall energy spectrum distinguishes RBL-like from XBL-like objects. We employ the sample of Giommi et al. (1995) to study the distribution of BL Lacertae objects with various locations of the cutoff of the overall energy spectrum. We find that sources with the cutoff located at lower frequency indeed sit in the RBL region of the $\alpha_{ro}$-$\alpha_{ox}$ plane, while those with the cutoff at higher frequency are distributed in the XBL region. For a more quantitative study, we employ the BL Lacertae samples presented by Sambruna et al. (1996), in which the peak emission frequency $\nu_p$ of each source is estimated by fitting the data with a parabolic function. In the $\alpha_{rx}$-$\log\nu_p$ plot we find that, of the four regions divided by the $\alpha_{rx}=0.75$ line and the $\log\nu_p=14.7$ line, all RBL-like objects lie inside the upper left region, while most XBL-like objects lie within the lower right region. A few sources are located in the lower left region, and none in the upper right region. This result is rather quantitative, and it provides evidence supporting the suggestion of Giommi et al. (1995) that RBL-like and XBL-like objects can be distinguished by the peak emission frequency of the overall energy spectrum.
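    The two-line partition of the $\alpha_{rx}$-$\log\nu_p$ plane reduces to a simple classification rule. The sketch below encodes it using the dividing values quoted above; the sample points are hypothetical.

```python
# Tiny sketch of the two-line classification rule described above.
# The dividing values alpha_rx = 0.75 and log10(nu_p) = 14.7 come
# from the abstract; the example sources are hypothetical.
ALPHA_RX_CUT = 0.75
LOG_NU_P_CUT = 14.7

def classify(alpha_rx, log_nu_p):
    if alpha_rx >= ALPHA_RX_CUT and log_nu_p <= LOG_NU_P_CUT:
        return "RBL-like"    # upper left region: all RBL-like objects
    if alpha_rx < ALPHA_RX_CUT and log_nu_p > LOG_NU_P_CUT:
        return "XBL-like"    # lower right region: most XBL-like objects
    return "ambiguous"       # lower left holds a few sources; upper right, none

for a, n in [(0.85, 13.5), (0.55, 16.2), (0.60, 14.0)]:
    print(f"alpha_rx={a}, log nu_p={n} -> {classify(a, n)}")
```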
